Supplementary Material: Extrapolation Towards Imaginary 0-Nearest Neighbour and Its Improved Convergence Rate

Okuno, Akifumi, Shimodaira, Hidetoshi

Neural Information Processing Systems

A Related works: Györfi (1981) is the first work to prove the convergence rate O(n ... In this section, we describe the Nadaraya-Watson (NW) classifier, the Local Polynomial (LP) classifier, and their convergence rates (Audibert & Tsybakov, 2007). Proof of Corollary 2: Proposition 6 immediately proves the assertion. We basically follow the proof of Chaudhuri & Dasgupta (2014), Theorem 4(b). In Section G.1, we first define symbols; in Section G.2, we describe a sketch of the proof and the main differences between our proof and that of Chaudhuri & Dasgupta (2014); Section G.3 shows the main body of the proof, utilizing several lemmas listed in Section G.4. A minimum radius whose measure of the ball is larger than t > 0, i.e., r ... (Chaudhuri & Dasgupta (2014), Lemma 21). Then, the assertion is proved. See Section G.4 for Lemmas 1-7 used in this proof.
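The excerpt above mentions the Nadaraya-Watson (NW) classifier as one of the comparison methods whose convergence rate is analyzed. Below is a minimal, hypothetical sketch of an NW-style plug-in classifier; the Gaussian kernel, the fixed bandwidth, and the function name are assumptions made for illustration, not the choices analyzed in the supplementary material.

```python
# Minimal sketch of a Nadaraya-Watson plug-in classifier for binary labels.
# The kernel and bandwidth below are illustrative assumptions.
import numpy as np

def nw_classify(X_train, y_train, x_query, bandwidth=0.5):
    """Predict a binary label by thresholding a kernel-weighted average at 1/2."""
    # Gaussian kernel weights based on distance to the query point.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    weights = np.exp(-0.5 * (dists / bandwidth) ** 2)
    if weights.sum() == 0.0:        # query far from all data: fall back to the majority class
        return int(y_train.mean() >= 0.5)
    eta_hat = np.dot(weights, y_train) / weights.sum()   # estimate of P(Y=1 | X=x)
    return int(eta_hat >= 0.5)

# Toy usage example
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
print(nw_classify(X, y, np.array([1.0, 0.0])))   # expected: 1
```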



A Two-Stage Active Learning Algorithm for $k$-Nearest Neighbors

Rittler, Nick, Chaudhuri, Kamalika

arXiv.org Artificial Intelligence

$k$-nearest neighbor classification is a popular non-parametric method because of desirable properties like automatic adaptation to distributional scale changes. Unfortunately, it has thus far proved difficult to design active learning strategies for the training of local voting-based classifiers that naturally retain these desirable properties, and hence active learning strategies for $k$-nearest neighbor classification have been conspicuously missing from the literature. In this work, we introduce a simple and intuitive active learning algorithm for the training of $k$-nearest neighbor classifiers, the first in the literature that retains the concept of the $k$-nearest neighbor vote at prediction time. We provide consistency guarantees for a modified $k$-nearest neighbors classifier trained on samples acquired via our scheme, and show that when the conditional probability function $\mathbb{P}(Y=y|X=x)$ is sufficiently smooth and the Tsybakov noise condition holds, our actively trained classifiers converge to the Bayes optimal classifier at a faster asymptotic rate than passively trained $k$-nearest neighbor classifiers.
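The abstract does not spell out the two-stage sampling scheme, but it emphasizes that the resulting classifier keeps the $k$-nearest neighbor vote at prediction time. The sketch below only illustrates that prediction-time vote on passively collected data; the function name and the tie-breaking rule are assumptions for illustration, not the paper's algorithm.

```python
# Minimal sketch of the k-nearest-neighbor vote at prediction time referred to in
# the abstract. The active sampling stage of the paper is not reproduced here.
import numpy as np

def knn_vote(X_train, y_train, x_query, k=5):
    """Predict the majority label among the k nearest training points."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nn_idx = np.argsort(dists)[:k]                 # indices of the k nearest neighbors
    labels, counts = np.unique(y_train[nn_idx], return_counts=True)
    return labels[np.argmax(counts)]               # majority vote (ties broken by label order)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
print(knn_vote(X, y, np.array([0.5, 0.5]), k=7))   # expected: 1
```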


One-Nearest-Neighbor Search is All You Need for Minimax Optimal Regression and Classification

Ryu, J. Jon, Kim, Young-Han

arXiv.org Artificial Intelligence

Recently, Qiao, Duan, and Cheng (2019) proposed a distributed nearest-neighbor classification method, in which a massive dataset is split into smaller groups, each processed with a $k$-nearest-neighbor classifier, and the final class label is predicted by a majority vote among these groupwise class labels. This paper shows that the distributed algorithm with $k=1$ over a sufficiently large number of groups attains a minimax optimal error rate, up to a multiplicative logarithmic factor, under some regularity conditions, for both regression and classification problems. Roughly speaking, distributed 1-nearest-neighbor rules with $M$ groups have performance comparable to that of standard $\Theta(M)$-nearest-neighbor rules. In the analysis, alternative rules with a refined aggregation method are proposed and shown to attain exact minimax optimal rates.
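A minimal sketch of the split-and-vote rule described in the first sentence of the abstract: the data are split into $M$ groups, each group contributes its 1-nearest-neighbor label, and the labels are aggregated by majority vote. The random splitting, tie-breaking, and parameter names are assumptions; the refined aggregation rules from the paper's analysis are not reproduced.

```python
# Sketch of a distributed 1-NN rule: split the data into M groups, take the
# 1-nearest-neighbor label within each group, and aggregate by majority vote.
import numpy as np

def distributed_1nn(X_train, y_train, x_query, M=25, seed=0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X_train))           # random split into M groups
    votes = []
    for group in np.array_split(perm, M):
        dists = np.linalg.norm(X_train[group] - x_query, axis=1)
        votes.append(y_train[group][np.argmin(dists)])   # 1-NN label within the group
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]               # majority vote over groupwise labels

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(1000, 2))
y = (X[:, 0] > 0).astype(int)
print(distributed_1nn(X, y, np.array([0.4, -0.2]), M=25))   # expected: 1
```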


Extrapolation Towards Imaginary $0$-Nearest Neighbour and Its Improved Convergence Rate

Okuno, Akifumi, Shimodaira, Hidetoshi

arXiv.org Machine Learning

$k$-nearest neighbour ($k$-NN) is one of the simplest and most widely used methods for supervised classification: it predicts a query's label by taking a weighted ratio of the observed labels of the $k$ objects nearest to the query. The weights and the parameter $k \in \mathbb{N}$ regulate its bias-variance trade-off, and this trade-off implicitly affects the convergence rate of the excess risk of the $k$-NN classifier; several existing studies have considered selecting the optimal $k$ and weights to obtain a faster convergence rate. Whereas $k$-NN with non-negative weights has been developed widely, it has been proved that negative weights are essential for eliminating the bias terms and attaining the optimal convergence rate. However, computing the optimal weights requires solving entangled equations, so simpler approaches that can find optimal real-valued weights are valuable in practice. In this paper, we propose multiscale $k$-NN (MS-$k$-NN), which extrapolates unweighted $k$-NN estimators from several values $k \ge 1$ to $k=0$, thus giving an imaginary 0-NN estimator. MS-$k$-NN implicitly corresponds to an adaptive method for finding favorable real-valued weights, and we theoretically prove that MS-$k$-NN attains an improved rate, which coincides with the existing optimal rate under some conditions.
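A rough sketch of the extrapolation idea in the abstract: unweighted $k$-NN estimates are computed for several values of $k$ and extrapolated to $k = 0$. Regressing the estimates on even powers of the $k$-th neighbor distance and taking the intercept as the imaginary 0-NN value is an assumption made for illustration; the paper's exact multiscale procedure may differ.

```python
# Illustrative sketch of extrapolating unweighted k-NN estimates to k = 0.
# The polynomial-in-radius regression below is an assumed extrapolation scheme.
import numpy as np

def ms_knn_estimate(X_train, y_train, x_query, ks=(5, 10, 20, 40), degree=2):
    d = np.linalg.norm(X_train - x_query, axis=1)
    order = np.argsort(d)
    eta_hats = np.array([y_train[order[:k]].mean() for k in ks])  # unweighted k-NN estimates
    radii = np.array([d[order[k - 1]] for k in ks])               # distance to the k-th neighbor
    # Least-squares fit eta_hat(k) ~ b0 + b1*r^2 + ... + b_degree*r^(2*degree);
    # the intercept b0 is the extrapolated "0-NN" estimate.
    design = np.vander(radii ** 2, N=degree + 1, increasing=True)
    coefs, *_ = np.linalg.lstsq(design, eta_hats, rcond=None)
    return coefs[0]

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(2000, 2))
y = (rng.uniform(size=2000) < 0.5 + 0.4 * np.tanh(3 * X[:, 0])).astype(int)
print(ms_knn_estimate(X, y, np.array([0.3, 0.0])))   # true P(Y=1|x) at this query is ~0.79
```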


Benefit of Interpolation in Nearest Neighbor Algorithms

Xing, Yue, Song, Qifan, Cheng, Guang

arXiv.org Machine Learning

Over-parameterized models attract much attention in the era of data science and deep learning. It is empirically observed that although these models, e.g., deep neural networks, over-fit the training data, they can still achieve small testing error, and sometimes even {\em outperform} traditional algorithms that are designed to avoid over-fitting. The major goal of this work is to sharply quantify the benefit of data interpolation in the context of the nearest neighbors (NN) algorithm. Specifically, we consider a class of interpolated weighting schemes and carefully characterize their asymptotic performance. Our analysis reveals a U-shaped performance curve with respect to the level of data interpolation, and proves that a mild degree of data interpolation {\em strictly} improves the prediction accuracy and statistical stability over those of the (un-interpolated) optimal $k$NN algorithm. This theoretically justifies (predicts) the existence of the second U-shaped curve in the recently discovered double descent phenomenon. Note that our goal in this study is not to promote the use of the interpolated-NN method, but to obtain theoretical insights into data interpolation inspired by the aforementioned phenomenon.
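The abstract refers to a class of interpolated weighting schemes without spelling them out. The sketch below uses one generic interpolating choice, inverse-distance weights with exponent gamma, so that the prediction reproduces a training label as the query approaches a training point; this is not necessarily the scheme analyzed in the paper, and the exponent and parameter names are assumptions.

```python
# Illustrative interpolating nearest-neighbor weighting: the i-th neighbor gets
# weight proportional to distance^(-gamma), so the estimate converges to a
# training label as the query approaches a training point (data interpolation).
import numpy as np

def interpolated_nn_estimate(X_train, y_train, x_query, k=10, gamma=1.0, eps=1e-12):
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nn_idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[nn_idx] + eps) ** gamma       # diverges as a neighbor's distance -> 0
    return np.dot(w, y_train[nn_idx]) / w.sum()    # weighted estimate of P(Y=1|x)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 1] > 0).astype(int)
# At a training point the estimate (nearly) interpolates that point's label:
print(interpolated_nn_estimate(X, y, X[0], k=10, gamma=2.0), y[0])
```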


Pruning nearest neighbor cluster trees

Kpotufe, Samory, von Luxburg, Ulrike

arXiv.org Machine Learning

Nearest neighbor (k-NN) graphs are widely used in machine learning and data mining applications, and our aim is to better understand what they reveal about the cluster structure of the unknown underlying distribution of points. Moreover, is it possible to identify spurious structures that might arise due to sampling variability? Our first contribution is a statistical analysis that reveals how certain subgraphs of a k-NN graph form a consistent estimator of the cluster tree of the underlying distribution of points. Our second and perhaps most important contribution is the following finite sample guarantee. We carefully work out the tradeoff between aggressive and conservative pruning and are able to guarantee the removal of all spurious cluster structures at all levels of the tree while at the same time guaranteeing the recovery of salient clusters. This is the first such finite sample result in the context of clustering.
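A rough illustration of the k-NN-graph view of the cluster tree described in the abstract: density at each point is estimated from its k-th neighbor distance, and connected components of the mutual k-NN graph restricted to increasingly dense points are tracked across density levels. The specific subgraphs and the pruning rule analyzed in the paper are not reproduced; the density estimate, thresholds, and helper names here are assumptions for illustration.

```python
# Sketch of level sets of a k-NN-graph-based cluster tree: estimate density from the
# k-th neighbor distance, then count connected components of the mutual k-NN graph
# among points whose estimated density exceeds each level.
import numpy as np

def knn_graph_levels(X, k=10, n_levels=5):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]                  # k nearest neighbors (excluding self)
    r_k = D[np.arange(n), nn[:, -1]]                        # distance to the k-th neighbor
    density = 1.0 / (r_k + 1e-12)                           # crude density estimate: higher = denser
    edges = [(i, j) for i in range(n) for j in nn[i] if i in nn[j]]   # mutual k-NN edges

    for lam in np.quantile(density, np.linspace(0.0, 0.8, n_levels)):
        keep = density >= lam                               # points above the current density level
        parent = list(range(n))                             # union-find over kept points
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for i, j in edges:
            if keep[i] and keep[j]:
                parent[find(i)] = find(j)
        comps = {find(i) for i in range(n) if keep[i]}
        print(f"level {lam:.2f}: {len(comps)} cluster(s) among {keep.sum()} points")

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-2, 0.4, (100, 2)), rng.normal(2, 0.4, (100, 2))])
knn_graph_levels(X, k=10, n_levels=5)   # prints the component count at each density level
```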